
    Active Learning for Deep Neural Networks on Edge Devices

    When deploying deep neural network (DNN) applications on edge devices, continuously updating the model is important. Although updating a model with real incoming data is ideal, using all of that data is not always feasible due to constraints such as labeling and communication costs. It is therefore necessary to filter and select the data used for training (i.e., active learning) on the device. In this paper, we formalize a practical active learning problem for DNNs on edge devices and propose a general task-agnostic framework that reduces the problem to stream submodular maximization. The framework is light enough to run with low computational resources, yet the submodular property yields a theoretical guarantee on the quality of its solutions. Within this framework, data selection criteria can be configured flexibly, including methods proposed in previous active learning studies. We evaluate our approach on classification and object detection tasks in a setting that simulates a real-life scenario. The results show that the proposed framework outperforms all other methods on both tasks while running at a practical speed on real devices.
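    To make the reduction concrete, below is a minimal sketch of single-pass streaming selection under a submodular utility. The utility here (a log-determinant diversity score over an RBF kernel) and all names, thresholds, and parameters are illustrative assumptions, not the paper's actual selection criterion or algorithm.

```python
import numpy as np

def logdet_utility(points):
    """Submodular diversity score log det(I + K) for an RBF kernel
    over the selected points; larger means broader coverage."""
    if not points:
        return 0.0
    P = np.stack(points)
    sq = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq)  # RBF kernel, unit bandwidth (illustrative choice)
    return np.linalg.slogdet(np.eye(len(P)) + K)[1]

def stream_select(stream, k, utility, threshold):
    """Single-pass selection: keep an arriving sample only if its marginal
    utility gain exceeds `threshold`, up to a budget of k samples.
    Submodularity is what makes this greedy one-pass rule sound."""
    selected = []
    for x in stream:
        if len(selected) == k:
            break
        gain = utility(selected + [x]) - utility(selected)
        if gain >= threshold:
            selected.append(x)
    return selected

# Toy usage: random vectors stand in for per-sample model embeddings.
rng = np.random.default_rng(0)
stream = (rng.normal(size=8) for _ in range(500))
batch = stream_select(stream, k=16, utility=logdet_utility, threshold=0.05)
print(len(batch))
```

    Because the rule only needs the marginal gain of the current sample, memory and compute stay bounded regardless of stream length, which is what makes this style of selection viable on a resource-limited device.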

    Bandits Help Simulated Annealing to Complete a Maximin Latin Hypercube Design

    Simulated Annealing (SA) is commonly considered an efficient method for constructing Maximin Latin Hypercube Designs (LHDs), which are widely used in experimental design. The Maximin LHD construction problem generalizes to the Maximin LHD completion problem, in which measurements have already been taken at certain points; construction is then the special case of completion with no points given in advance. Since Maximin LHD completion has been proved NP-complete and inapproximable within a constant factor, SA is a natural choice for tackling it. SA performance varies greatly with the mutation used, and the completion problem is difficult because its nature changes with the number of given points. With few fixed points, completion behaves much like construction; with many fixed points, the search space is considerably restricted and a different mutation is appropriate. A phase transition exists between these extreme cases. We therefore equip SA with a mechanism that selects an appropriate mutation. Our approach rests on the observation that choosing a mutation can be seen as a bandit problem, one that must cope with an environment that evolves along with the thermal descent. The results show that the bandit-driven SA adapts on the fly to the nature of the completion problem. We believe that other parametrized problems where SA can be employed may benefit significantly from a decision-making algorithm that selects the appropriate mutation.
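    As an illustration of the mechanism, here is a minimal sketch of SA for Maximin LHD completion in which an epsilon-greedy bandit chooses between two mutation operators at every step. The two operators, the reward (raw improvement in the maximin score), and all parameters are assumptions made for the sketch, not the authors' construction; in particular, a stationary-mean bandit ignores the nonstationarity that the paper addresses, where a discounted or sliding-window bandit would be a better match.

```python
import numpy as np

rng = np.random.default_rng(1)

def min_dist(design):
    """Maximin objective: smallest pairwise Euclidean distance in the design."""
    d2 = ((design[:, None, :] - design[None, :, :]) ** 2).sum(-1).astype(float)
    np.fill_diagonal(d2, np.inf)
    return float(np.sqrt(d2.min()))

def swap_random(design, fixed):
    """Mutation A: swap one column's values between two random free rows."""
    new = design.copy()
    i, j = rng.choice(np.arange(fixed, len(new)), size=2, replace=False)
    c = int(rng.integers(new.shape[1]))
    new[i, c], new[j, c] = new[j, c], new[i, c]
    return new

def swap_guided(design, fixed):
    """Mutation B: targeted move on one row of the current closest pair."""
    d2 = ((design[:, None, :] - design[None, :, :]) ** 2).sum(-1).astype(float)
    np.fill_diagonal(d2, np.inf)
    i = int(np.unravel_index(d2.argmin(), d2.shape)[0])
    if i < fixed:                       # closest-pair row is fixed: fall back
        return swap_random(design, fixed)
    free = [r for r in range(fixed, len(design)) if r != i]
    j = int(rng.choice(free))
    c = int(rng.integers(design.shape[1]))
    new = design.copy()
    new[i, c], new[j, c] = new[j, c], new[i, c]
    return new

def bandit_sa(design, fixed, steps=20000, t0=1.0, cooling=0.9995, eps=0.1):
    """SA where an epsilon-greedy bandit picks the mutation at each step."""
    mutations = [swap_random, swap_guided]
    counts, values = np.zeros(2), np.zeros(2)  # pulls and mean reward per arm
    cur, score, t = design, min_dist(design), t0
    for _ in range(steps):
        arm = int(rng.integers(2)) if rng.random() < eps else int(values.argmax())
        cand = mutations[arm](cur, fixed)
        delta = min_dist(cand) - score
        if delta >= 0 or rng.random() < np.exp(delta / t):
            cur, score = cand, score + delta
        reward = max(delta, 0.0)               # pay the arm for real improvement
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        t *= cooling
    return cur, score

# 20-point LHD in 2 dimensions; the first 5 rows are fixed (already measured).
n, d, fixed = 20, 2, 5
design = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
completed, score = bandit_sa(design, fixed)
print(score)
```

    Swapping values within a single column keeps every column a permutation, so each mutation preserves the Latin hypercube property while leaving the fixed rows untouched.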